perfect information game
- Europe > Italy (0.04)
- North America > United States > Texas (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Asia > Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
- Overview (0.46)
- Research Report > New Finding (0.46)
- Leisure & Entertainment > Games > Chess (0.50)
- Leisure & Entertainment > Games > Backgammon (0.47)
- Leisure & Entertainment > Games > Go (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Games (1.00)
Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning
Koyamada, Sotetsu, Okano, Shinri, Nishimori, Soichiro, Murata, Yu, Habara, Keigo, Kita, Haruka, Ishii, Shin
We propose Pgx, a suite of board game reinforcement learning (RL) environments written in JAX and optimized for GPU/TPU accelerators. By leveraging JAX's auto-vectorization and parallelization, Pgx can efficiently scale to thousands of simultaneous simulations on accelerators. In our experiments on a DGX-A100 workstation, we found that Pgx can simulate RL environments 10-100x faster than existing implementations available in Python. Pgx includes RL environments commonly used as benchmarks in RL research, such as backgammon, chess, shogi, and Go. Additionally, Pgx offers miniature game sets and baseline models to facilitate rapid research cycles. We demonstrate the efficient training of the Gumbel AlphaZero algorithm with Pgx environments. Overall, Pgx provides high-performance environment simulators for researchers to accelerate their RL experiments. Pgx is available at https://github.com/sotetsuk/pgx.
- Europe > Italy (0.04)
- North America > United States > Texas (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Asia > Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
- Leisure & Entertainment > Games > Backgammon (0.67)
- Leisure & Entertainment > Games > Chess (0.52)
- Leisure & Entertainment > Games > Go (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Games (1.00)
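The abstract's core idea, vectorizing a single-environment step function across thousands of parallel games with `jax.vmap`, can be illustrated with a toy example. This is a hypothetical random-walk "environment" written for this note, not Pgx's actual API:

```python
import jax
import jax.numpy as jnp

# Hypothetical single-environment step function for a toy random-walk
# "game": the state drifts by the chosen action, and the reward is
# higher the closer the new state is to the origin. This illustrates
# the auto-vectorization idea only; it is not Pgx's API.
def step(state, action):
    new_state = state + action
    reward = -jnp.abs(new_state)
    return new_state, reward

# jax.vmap turns the one-environment step into a batched step over
# n_envs environments; jax.jit compiles it for the accelerator.
batched_step = jax.jit(jax.vmap(step))

n_envs = 4096
states = jnp.zeros(n_envs)
key = jax.random.PRNGKey(0)
actions = jax.random.choice(key, jnp.array([-1.0, 1.0]), shape=(n_envs,))

states, rewards = batched_step(states, actions)
print(states.shape, rewards.shape)  # (4096,) (4096,)
```

Because the batched step is a single compiled kernel, stepping 4,096 environments costs roughly one device launch rather than 4,096 Python calls, which is where the reported 10-100x speedups come from.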
An introduction to Monte Carlo Tree Search
We recently witnessed one of the biggest game AI events in history – Alpha Go became the first computer program to beat the world champion in a game of Go. The publication can be found here. Different techniques from machine learning and tree search have been combined by developers from DeepMind to achieve this result. One of them is the Monte Carlo Tree Search (MCTS) algorithm. This algorithm is fairly simple to understand and, interestingly, has applications outside of game AI.
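The algorithm the post introduces can indeed be sketched in a few dozen lines. The following is a minimal UCT-style MCTS on a toy Nim variant (players alternately take 1 or 2 stones; whoever takes the last stone wins). It follows the standard select/expand/simulate/backpropagate loop; all names are made up for the illustration, and this is not the Go-playing system described above:

```python
import math
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children = []
        self.visits = 0
        self.wins = 0.0  # from the perspective of the player who made `move`

    def untried(self):
        tried = {c.move for c in self.children}
        return [m for m in legal_moves(self.pile) if m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Average win rate plus an exploration bonus (UCB1).
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def simulate(pile):
    # Random playout; return 1 if the player who just moved
    # (leaving `pile` stones) ends up winning, else 0.
    just_moved_wins = 1
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        just_moved_wins = 1 - just_moved_wins
    return just_moved_wins

def mcts(root_pile, iters=2000):
    root = Node(root_pile)
    for _ in range(iters):
        node = root
        # 1. Selection: descend fully expanded nodes by UCB1.
        while not node.untried() and node.children:
            node = max(node.children, key=lambda c: ucb1(c, node.visits))
        # 2. Expansion: add one untried child, if any.
        moves = node.untried()
        if moves:
            m = random.choice(moves)
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        result = simulate(node.pile)
        # 4. Backpropagation: flip the result at each level up the tree.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1 - result
            node = node.parent
    # Play the most-visited root move.
    return max(root.children, key=lambda c: c.visits).move

random.seed(0)
print(mcts(4))  # the winning move leaves a multiple of 3 stones
```

Note that nothing here is specific to Nim: only `legal_moves` and the playout encode the game, which is why the same skeleton transfers to Go, chess, and applications outside game AI.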
Poker Is Harder for AI to Master Than Chess. AI has Now Learned to Bluff and Beat Humans.
The list of recent defeats in which humans were overmatched by machines is well known: chess champion Garry Kasparov losing against IBM's Deep Blue, Jeopardy wiz Ken Jennings being soundly defeated by IBM's Watson, and Go champion Lee Sedol losing to Google's AlphaGo. We may also be able to add poker to the list of AI superiority. A recent twenty-day competition between poker champions (heads-up no-limit Texas hold'em, 120,000 total hands) and Libratus, an AI program created by Carnegie Mellon University professor Tuomas Sandholm and PhD student Noam Brown, had the AI coming out on top. This is particularly surprising because, unlike games like chess and Go, where the information is up front and known ("perfect information games"), poker involves a great deal of hidden information ("imperfect information games") and the seemingly human characteristic of bluffing. It turns out that AI can learn the art of bluffing.
Artificial intelligence goes deep to beat humans at poker
Machines are finally getting the best of humans at poker. Two artificial intelligence (AI) programs have finally proven they "know when to hold 'em, and when to fold 'em," recently beating human professional card players for the first time at the popular poker game of Texas Hold 'em. And this week the team behind one of those AIs, known as DeepStack, has divulged some of the secrets to its success, a triumph that could one day lead to AIs that perform tasks ranging from beefing up airline security to simplifying business negotiations. AIs have long dominated games such as chess, and last year one conquered Go, but they have made relatively lousy poker players. With DeepStack, researchers have broken that poker losing streak by combining new algorithms and deep machine learning, a form of computer science that in some ways mimics the human brain, allowing machines to teach themselves.
- North America > United States > Texas (0.29)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.05)
- North America > United States > New York (0.05)
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.05)
- Leisure & Entertainment > Games > Poker (0.61)
- Leisure & Entertainment > Games > Chess (0.38)
Computer cashes in big at Texas Hold 'Em tourney
One of the proving grounds for artificial intelligence is games. Classic games have a fixed set of rules, and these make it easier for researchers to develop new techniques and algorithms that enable computers to play (and hopefully win) various games. Tic-tac-toe, checkers, and chess are all games where researchers have developed software that is capable of winning or drawing when paired off against the best human players in the world. Last weekend, researchers at the University of Alberta added another classic game to this list: poker. In a series of matches that took place over the Fourth of July weekend in Las Vegas, the researchers' Polaris poker program won against a group of top-ranked online poker players.
- North America > Canada > Alberta (0.56)
- North America > United States > Texas (0.43)
- North America > United States > Nevada > Clark County > Las Vegas (0.26)
Google's DeepMind to use 'messy' world of StarCraft for AI research
Google's DeepMind is teaming up with Blizzard Entertainment Inc. to open up the world of the game StarCraft II to artificial intelligence researchers. DeepMind Staff Research Scientist Oriol Vinyals (above) announced the new partnership today during Blizzcon, Blizzard's annual convention held in Anaheim, Calif. According to Vinyals, who is himself a longtime StarCraft player, Blizzard will be releasing a StarCraft II application programming interface early next year that will allow researchers to build and train AI agents to play the game. "For StarCraft players like myself, advances in AI could deal some drastic benefits," Vinyals said. "For example, we might see more interesting AI opponents for a variety of skill levels or AI coaches that can help players improve. And there's still a long way to go, but maybe we'll even see an agent take on the Blizzcon champion in a show match."